Generative AI model
A defense official reveals how AI chatbots could be used for targeting decisions
The US military might use generative AI systems to rank lists of targets and make recommendations--which would be vetted by humans--about which to strike first, according to a Defense Department official with knowledge of the matter. Though the US military's big data initiative Maven has sped up the planning of strikes for years, the comments suggest that generative AI is now adding a new interpretative layer to such deliberations. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating. A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background to discuss sensitive topics, humans might ask the system to analyze the information and prioritize the targets while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations.
- South America > Venezuela (0.15)
- Asia > Middle East > Iran (0.07)
- North America > United States > Massachusetts (0.05)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.72)
Robot, make me a chair
Computer-aided design (CAD) systems are tried-and-true tools used to design many of the physical objects we use each day. But CAD software requires extensive expertise to master, and many tools incorporate such a high level of detail they don't lend themselves to brainstorming or rapid prototyping. In an effort to make design faster and more accessible for non-experts, researchers from MIT and elsewhere developed an AI-driven robotic assembly system that allows people to build physical objects by simply describing them in words. Their system uses a generative AI model to build a 3D representation of an object's geometry based on the user's prompt. Then, a second generative AI model reasons about the desired object and figures out where different components should go, according to the object's function and geometry.
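The two-stage pipeline described above can be sketched in miniature. Everything below is illustrative: the function names, the part list, and the placement logic are assumptions standing in for the researchers' actual text-to-3D and reasoning models, which are not described in detail here.

```python
# Hypothetical sketch of the two-stage text-to-assembly pipeline:
# stage 1 turns a prompt into component geometry, stage 2 reasons
# about where each component belongs. Both stages are stand-ins.
from dataclasses import dataclass


@dataclass
class Part:
    name: str
    position: tuple  # (x, y, z) placement chosen by the reasoning stage


def generate_geometry(prompt: str) -> list:
    """Stage 1 stand-in: a text-to-3D model would return meshes here.
    We return named components to keep the sketch self-contained."""
    if "chair" in prompt.lower():
        return ["seat", "backrest", "leg", "leg", "leg", "leg"]
    return ["block"]


def plan_assembly(components: list) -> list:
    """Stage 2 stand-in: a reasoning model would place each component
    according to the object's function and geometry."""
    fixed = {"seat": (0, 0, 0.45), "backrest": (0, -0.2, 0.9)}
    leg_slots = [(0.2, 0.2, 0), (-0.2, 0.2, 0), (0.2, -0.2, 0), (-0.2, -0.2, 0)]
    parts = []
    for c in components:
        if c == "leg":
            parts.append(Part(c, leg_slots.pop()))
        else:
            parts.append(Part(c, fixed.get(c, (0, 0, 0))))
    return parts


plan = plan_assembly(generate_geometry("Robot, make me a chair"))
```

The separation mirrors the article's design: geometry generation and functional placement are distinct models, so either stage could be swapped out independently.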
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.93)
- Information Technology > Artificial Intelligence > Vision (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.49)
EU opens investigation into Google's use of online content for AI models
Google runs the Gemini AI model and is owned by Alphabet. Tue 9 Dec 2025 05.06 EST. First published on Tue 9 Dec 2025 03.48 EST. The EU has opened an investigation to assess whether Google is breaching European competition rules in its use of online content from publishers and YouTube creators for artificial intelligence. The European Commission said on Tuesday it will examine whether the US tech company, which runs the Gemini AI model and is owned by Alphabet, is putting rival AI developers at a "disadvantage". "The investigation will notably examine whether Google is distorting competition by imposing unfair terms and conditions on publishers and content creators, or by granting itself privileged access to such content, thereby placing developers of rival AI models at a disadvantage," the commission said.
- North America > United States (0.71)
- Oceania > Australia (0.06)
- Asia (0.05)
- (3 more...)
Can generative AI figure out figurative language? The influence of idioms on essay scoring by ChatGPT, Gemini, and Deepseek
The developments in Generative AI technologies have paved the way for numerous innovations in different fields. Recently, Generative AI has been proposed as a competitor to AES systems in evaluating student essays automatically. Considering the potential limitations of AI in processing idioms, this study assessed the scoring performances of Generative AI models for essays with and without idioms by incorporating insights from Corpus Linguistics and Computational Linguistics. Two equal lists were created from 348 student essays taken from a corpus: one in which every essay contained multiple idioms and another in which essays contained no idioms. Three Generative AI models (ChatGPT, Gemini, and Deepseek) were asked to score all essays in both lists three times, using the same rubric used by human raters in assigning essay scores. The results revealed excellent consistency for all models, but Gemini outperformed its competitors in interrater reliability with human raters. There was also no detectable bias for any demographic group in AI assessment. For essays with multiple idioms, Gemini followed the most similar pattern to human raters. While the models in the study demonstrated potential for a hybrid approach, Gemini was the best candidate for the task due to its ability to handle figurative language, and it showed promise for handling essay-scoring tasks alone in the future.
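The evaluation design above — repeated scoring passes per model, then agreement with human raters — can be illustrated with a toy computation. The scores below are invented, and the study's actual reliability statistic may differ; plain Pearson correlation is used here only for simplicity.

```python
# Toy illustration of the interrater-reliability check: average a
# model's three scoring passes per essay, then correlate the averages
# with hypothetical human rubric scores. All numbers are made up.
from math import sqrt


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


human = [78, 85, 62, 90, 70, 88]  # hypothetical human rubric scores
model_runs = [                    # three scoring passes by one model
    [80, 84, 60, 92, 68, 90],
    [79, 86, 63, 91, 71, 87],
    [81, 83, 61, 90, 69, 89],
]
mean_model = [sum(run[i] for run in model_runs) / len(model_runs)
              for i in range(len(human))]
r = pearson_r(human, mean_model)
```

Averaging the three passes first captures the study's "consistency" dimension; correlating with human scores captures the interrater-reliability dimension on which Gemini led.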
- North America > United States (0.14)
- Europe > Austria > Vienna (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > Middle East > Republic of Türkiye (0.04)
- Education > Curriculum > Subject-Specific Education (0.54)
- Education > Assessment & Standards > Student Performance (0.38)
The ML.ENERGY Benchmark: Toward Automated Inference Energy Measurement and Optimization
Chung, Jae-Won, Ma, Jeff J., Wu, Ruofan, Liu, Jiachen, Kweon, Oh Jun, Xia, Yuxuan, Wu, Zhiyu, Chowdhury, Mosharaf
As the adoption of Generative AI in real-world services grows explosively, energy has emerged as a critical bottleneck resource. However, energy remains a metric that is often overlooked, under-explored, or poorly understood in the context of building ML systems. We present the ML.ENERGY Benchmark, a benchmark suite and tool for measuring inference energy consumption under realistic service environments, and the corresponding ML.ENERGY Leaderboard, which have served as a valuable resource for those hoping to understand and optimize the energy consumption of their generative AI services. In this paper, we explain four key design principles for benchmarking ML energy we have acquired over time, and then describe how they are implemented in the ML.ENERGY Benchmark. We then highlight results from the early 2025 iteration of the benchmark, including energy measurements of 40 widely used model architectures across 6 different tasks, case studies of how ML design choices impact energy consumption, and how automated optimization recommendations can lead to significant (sometimes more than 40%) energy savings without changing what is being computed by the model. The ML.ENERGY Benchmark is open-source and can be easily extended to various customized models and application scenarios.
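The core measurement the benchmark automates — energy as average power times elapsed time around an inference call — can be sketched as follows. The power reader below is a stand-in that returns a constant; on NVIDIA GPUs a real harness would query power via NVML. None of the names here are the ML.ENERGY Benchmark's actual API.

```python
# Minimal sketch of inference energy measurement: sample device power
# around a workload and integrate over elapsed time to get joules.
import time


def read_power_watts():
    """Stand-in for a device power query (e.g., an NVML power-draw
    read on NVIDIA GPUs). Returns a fixed 250 W for illustration."""
    return 250.0


def measure_energy_joules(fn):
    """Run fn(); return (result, estimated energy in joules), where
    energy = average of before/after power samples x elapsed time."""
    start = time.monotonic()
    p0 = read_power_watts()
    result = fn()
    p1 = read_power_watts()
    elapsed = time.monotonic() - start
    return result, (p0 + p1) / 2.0 * elapsed


result, joules = measure_energy_joules(lambda: sum(range(100_000)))
```

A production harness samples power on a background thread at a fixed period rather than just twice, since inference power draw is far from constant; that refinement is what makes benchmark-grade numbers trustworthy.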
- North America > United States > Michigan (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- Energy (1.00)
- Information Technology > Services (0.93)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.56)
Generative AI model maps how a new antibiotic targets gut bacteria
For patients with inflammatory bowel disease, antibiotics can be a double-edged sword. The broad-spectrum drugs often prescribed for gut flare-ups can kill helpful microbes alongside harmful ones, sometimes worsening symptoms over time. When fighting gut inflammation, you don't always want to bring a sledgehammer to a knife fight. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and McMaster University have identified a new compound that takes a more targeted approach. The molecule, called enterololin, suppresses a group of bacteria linked to Crohn's disease flare-ups while leaving the rest of the microbiome largely intact.
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.93)
- Information Technology > Artificial Intelligence > Vision (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.85)
What happens when generative AI models train recursively on each others' outputs?
Vu, Hung Anh, Reeves, Galen, Wenger, Emily
The internet serves as a common source of training data for generative AI (genAI) models but is increasingly populated with AI-generated content. This duality raises the possibility that future genAI models may be trained on other models' generated outputs. Prior work has studied consequences of models training on their own generated outputs, but limited work has considered what happens if models ingest content produced by other models. Given society's increasing dependence on genAI tools, understanding such data-mediated model interactions is critical. This work provides empirical evidence for how data-mediated interactions might unfold in practice, develops a theoretical model for this interactive training process, and experimentally validates the theory. We find that data-mediated interactions can benefit models by exposing them to novel concepts perhaps missed in original training data, but also can homogenize their performance on shared tasks.
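The homogenization effect described above can be reproduced in a toy setting. This is an illustration in the spirit of the abstract, not the paper's actual experimental protocol: each "model" is just a Gaussian mean estimate, and every round it refits on a mix of its own samples and the other model's samples.

```python
# Toy simulation of data-mediated interaction between two generative
# models: each round, each model retrains on a 50/50 mix of its own
# output and the other model's output. Their fitted means converge.
import random

random.seed(0)


def sample(mean, n, sigma=1.0):
    """Draw n samples from the model, treated as a Gaussian."""
    return [random.gauss(mean, sigma) for _ in range(n)]


def fit_mean(samples):
    """'Train' a model by fitting the sample mean."""
    return sum(samples) / len(samples)


mean_a, mean_b = 0.0, 10.0  # the models start with distinct "concepts"
for _ in range(20):
    data_a = sample(mean_a, 500) + sample(mean_b, 500)  # A ingests B's output
    data_b = sample(mean_b, 500) + sample(mean_a, 500)  # B ingests A's output
    mean_a, mean_b = fit_mean(data_a), fit_mean(data_b)
```

Both means collapse toward the midpoint within one round: each model has been "exposed to a novel concept" (the other's region of the distribution), but the two models also become nearly indistinguishable — a minimal analogue of the benefit and the homogenization the paper reports.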
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.49)
Statistical Methods in Generative AI
Artificial Intelligence, and more specifically Generative AI, is emerging as an important technology. Over the past few years a number of prominent generative AI technologies have been developed and have received widespread attention, ranging from text generation via large language models (ChatGPT, Claude, Llama, Gemini, DeepSeek, Qwen, etc.) and image generation via diffusion models (Dall-E, Stable Diffusion, etc.) to scientific generative AI techniques used for protein generation (e.g., Watson et al. 2023) and DNA sequence editing (e.g., Ruffolo et al. 2025), among others. Such methods have been quickly adopted by end users and institutions, both via direct usage and as components integrated into other tools such as code assistants and web search agents. The scientific community has shown significant interest in using generative AI models, achieving a number of breakthrough results (see e.g., Davies et al. 2021, Hayes et al. 2025), culminating in a 2024 Nobel Prize in Chemistry awarded in part for work with a significant component in protein structure design and generation (The Royal Swedish Academy of Sciences 2024). Yet the adoption of generative AI (GenAI) methods more generally is hindered by their lack of reliability (see e.g., Farquhar et al. 2024, Strauss et al. 2025, Manduchi et al. 2025).
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.14)
- Asia > Middle East > Jordan (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- (5 more...)
- Research Report (1.00)
- Overview (1.00)
- Personal > Honors (0.54)
- Education (0.93)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.86)